23 research outputs found

    On the calculation of the linear complexity of periodic sequences

    Based on a result of Hao Chen in 2006, we present a general procedure for reducing the determination of the linear complexity of a sequence over a finite field \F_q of period un to the determination of the linear complexities of u sequences over \F_q of period n. We apply this procedure to some classes of periodic sequences over a finite field \F_q, obtaining efficient algorithms to determine the linear complexity
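    The reduction described in the abstract turns one period-un problem into u period-n subproblems; the linear complexity of each subproblem can then be computed with the classical Berlekamp-Massey algorithm. As an illustration only (the binary case q = 2, not the paper's reduction procedure), a minimal sketch:

```python
def linear_complexity_gf2(s):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR generating s."""
    n = len(s)
    c = [0] * n  # current connection polynomial, c[0] = constant term
    b = [0] * n  # previous connection polynomial before the last length change
    c[0] = b[0] = 1
    L, m = 0, -1  # current complexity and position of the last length change
    for i in range(n):
        # discrepancy: s[i] + sum_{j=1..L} c[j]*s[i-j] over GF(2)
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]  # c(x) += x^shift * b(x)
            if 2 * L <= i:
                L = i + 1 - L
                m = i
                b = t
    return L

# One full period of the alternating binary sequence has complexity 2.
print(linear_complexity_gf2([1, 0, 1, 0]))  # prints 2
```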

    Atom Search Optimization with Deep Learning Enabled Arabic Sign Language Recognition for Speaking and Hearing Disability Persons

    Sign language plays a crucial role in the lives of people with hearing and speaking disabilities, who can send messages via hand gesture movements. Arabic Sign Language (ASL) recognition is a very difficult task because of its high complexity and high intraclass similarity. Sign language may be utilized for the communication of sentences, letters, or words using diverse hand signs. Such communication helps to bridge the communication gap between people with hearing impairment and other people, and also makes it easy for people with hearing impairment to express their opinions. Recently, a large number of studies have been ongoing in developing systems capable of classifying signs of different sign languages into the given class. Therefore, this study designs an atom search optimization with a deep convolutional autoencoder-enabled sign language recognition (ASODCAE-SLR) model for speaking and hearing disabled persons. The presented ASODCAE-SLR technique mainly aims to assist the communication of speaking and hearing disabled persons via the SLR process. To accomplish this, the ASODCAE-SLR technique initially pre-processes the input frames by a weighted average filtering approach. In addition, the ASODCAE-SLR technique employs a capsule network (CapsNet) feature extractor to produce a collection of feature vectors. For the recognition of sign language, the DCAE model is exploited in the study. At the final stage, the ASO algorithm is utilized as a hyperparameter optimizer, which in turn increases the efficacy of the DCAE model. The experimental validation of the ASODCAE-SLR model is tested using the Arabic Sign Language dataset. The simulation analysis exhibits the enhanced performance of the ASODCAE-SLR model compared to existing models.
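    The abstract names weighted average filtering as the frame pre-processing step. A minimal sketch of one plausible variant (a 3x3 normalized kernel with replicate border padding; the exact weights and border handling are assumptions, not taken from the paper):

```python
def weighted_average_filter(frame, kernel):
    """Smooth a grayscale frame (2-D list) with a normalized weighted-average kernel."""
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    total = sum(sum(row) for row in kernel)
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    # clamp coordinates at the border (replicate padding)
                    sy = min(max(y + ky - ph, 0), h - 1)
                    sx = min(max(x + kx - pw, 0), w - 1)
                    acc += kernel[ky][kx] * frame[sy][sx]
            out[y][x] = acc / total
    return out

# Gaussian-like 3x3 weights; a uniform frame passes through unchanged.
kernel = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
frame = [[10] * 4 for _ in range(4)]
print(weighted_average_filter(frame, kernel)[0][0])  # prints 10.0
```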

    Squirrel Search Optimization with Deep Transfer Learning-Enabled Crop Classification Model on Hyperspectral Remote Sensing Imagery

    With recent advances in remote sensing image acquisition and the increasing availability of fine spectral and spatial information, hyperspectral remote sensing images (HSI) have received considerable attention in several application areas such as agriculture, environment, forestry, and mineral mapping. HSIs have become an essential method for distinguishing crop classes and accomplishing growth information monitoring for precision agriculture, owing to the fine spectral response to crop attributes. Recent advances in computer vision (CV) and deep learning (DL) models allow for the effective identification and classification of different crop types on HSIs. This article introduces a novel squirrel search optimization with a deep transfer learning-enabled crop classification (SSODTL-CC) model on HSIs. The proposed SSODTL-CC model intends to identify the crop type in HSIs properly. To accomplish this, the proposed SSODTL-CC model initially derives a MobileNet with an Adam optimizer for the feature extraction process. In addition, an SSO algorithm with a bidirectional long short-term memory (BiLSTM) model is employed for crop type classification. To demonstrate the better performance of the SSODTL-CC model, a wide-ranging experimental analysis is performed on two benchmark datasets, namely dataset-1 (WHU-Hi-LongKou) and dataset-2 (WHU-Hi-HanChuan). The comparative analysis points out the better outcomes of the SSODTL-CC model over other models, with a maximum of 99.23% and 97.15% on test datasets 1 and 2, respectively.
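    The SSO step is a population-based metaheuristic used here for tuning. The following is a deliberately simplified stand-in, not the actual squirrel search update equations: candidates drift toward the current best with random perturbations, shown on a toy objective:

```python
import random

def population_search(objective, bounds, pop_size=20, iters=200, seed=0):
    """Minimal population-based minimizer: each candidate drifts toward the
    current best with a random step (a stand-in for the SSO glide update)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=objective)[:]
    for _ in range(iters):
        for cand in pop:
            for d in range(dim):
                step = rng.uniform(0, 1) * (best[d] - cand[d])
                cand[d] += step + rng.gauss(0, 0.05)
                lo, hi = bounds[d]
                cand[d] = min(max(cand[d], lo), hi)  # stay inside the search box
        best = min(pop + [best], key=objective)[:]
    return best

# Toy objective: sphere function, minimum at (0, 0).
sol = population_search(lambda v: sum(x * x for x in v), [(-5, 5), (-5, 5)])
print(sol)  # best candidate found, near the optimum (0, 0)
```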

    Modeling of Botnet Detection Using Chaotic Binary Pelican Optimization Algorithm With Deep Learning on Internet of Things Environment

    Nowadays, vast numbers of Internet of Things (IoT) devices are interconnected across networks, and with technological improvement, cyberattacks and security threats, for example botnets, are rapidly evolving and emerging as high-risk attacks. A botnet is a network of compromised devices controlled by cyber attackers, frequently employed to perform different cyberattacks. Such attacks hinder IoT evolution by disrupting services and networks for IoT devices. Detecting botnets in an IoT environment involves finding abnormal patterns or behaviors that might indicate the existence of these malicious networks. Several researchers have proposed deep learning (DL) and machine learning (ML) approaches for identifying and categorizing botnet attacks on the IoT platform. Therefore, this study introduces a Botnet Detection using the Chaotic Binary Pelican Optimization Algorithm with Deep Learning (BNT-CBPOADL) technique in the IoT environment. The main aim of the BNT-CBPOADL method lies in the correct detection and categorization of botnet attacks in the IoT environment. In the BNT-CBPOADL method, Z-score normalization is applied for pre-processing. Besides, the CBPOA technique is derived for feature selection. The convolutional variational autoencoder (CVAE) method is applied for botnet detection. At last, the arithmetic optimization algorithm (AOA) is employed for the optimal hyperparameter tuning of the CVAE algorithm. The experimental evaluation of the BNT-CBPOADL technique is tested on the Bot-IoT dataset. The experimental outcomes confirmed the superiority of the BNT-CBPOADL method over other existing techniques, with a maximum accuracy of 99.50%.
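    Z-score normalization, named in the abstract as the pre-processing step, can be sketched directly. A minimal per-feature implementation (the column layout and the constant-column guard are illustrative assumptions):

```python
import math

def z_score_normalize(columns):
    """Z-score normalization: rescale each feature column to mean 0, std 1."""
    normalized = []
    for col in columns:
        mean = sum(col) / len(col)
        var = sum((v - mean) ** 2 for v in col) / len(col)
        std = math.sqrt(var) or 1.0  # guard against constant columns
        normalized.append([(v - mean) / std for v in col])
    return normalized

# Example: two traffic features on very different scales end up comparable.
cols = [[10.0, 20.0, 30.0], [0.001, 0.002, 0.003]]
for col in z_score_normalize(cols):
    print([round(v, 3) for v in col])  # each prints [-1.225, 0.0, 1.225]
```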

    Enhanced Pelican Optimization Algorithm with Deep Learning-Driven Mitotic Nuclei Classification on Breast Histopathology Images

    Breast cancer (BC) is a prevalent disease worldwide, and accurate diagnoses are vital for successful treatment. Histopathological (HI) inspection, particularly the detection of mitotic nuclei, plays a pivotal role in the prognosis and diagnosis of BC. It includes the detection and classification of mitotic nuclei within breast tissue samples. Conventionally, the detection of mitotic nuclei has been a subjective task that is time-consuming for pathologists to perform manually. Automatic classification using computer algorithms, especially deep learning (DL) algorithms, has been developed as a beneficial alternative. DL models, particularly convolutional neural networks (CNNs), have shown outstanding performance in different image classification tasks, including mitotic nuclei classification. CNNs can learn intricate hierarchical features from HI images, making them suitable for detecting subtle patterns related to mitotic nuclei. In this article, we present an Enhanced Pelican Optimization Algorithm with a Deep Learning-Driven Mitotic Nuclei Classification (EPOADL-MNC) technique on breast HI. The developed EPOADL-MNC system examines histopathology images for the classification of mitotic and non-mitotic cells. In the presented EPOADL-MNC technique, the ShuffleNet model is employed for the feature extraction method. In the hyperparameter tuning procedure, the EPOADL-MNC algorithm makes use of the EPOA to adjust the hyperparameters of the ShuffleNet model. Finally, we use an adaptive neuro-fuzzy inference system (ANFIS) for the classification and detection of mitotic cell nuclei in histopathology images. A series of simulations took place to validate the improved detection performance of the EPOADL-MNC technique. The comprehensive outcomes highlighted the better results of the EPOADL-MNC algorithm compared to existing DL techniques, with a maximum accuracy of 97.83%.

    Computational Intelligence with Wild Horse Optimization Based Object Recognition and Classification Model for Autonomous Driving Systems

    Presently, autonomous systems have gained considerable attention in several fields such as transportation, healthcare, autonomous driving, and logistics. It is essential to ensure the safe operation of an autonomous system before launching it to the general public. Since the design of a completely autonomous system is a challenging process, perception and decision-making are vital components. The effective detection of objects on the road under varying scenarios can considerably enhance the safety of autonomous driving. Recently developed computational intelligence (CI) and deep learning models help to effectively design object detection algorithms for environment perception based on the camera system that exists in autonomous driving systems. With this motivation, this study designed a novel computational intelligence with a wild horse optimization-based object recognition and classification (CIWHO-ORC) model for autonomous driving systems. The proposed CIWHO-ORC technique intends to effectively identify the presence of multiple static and dynamic objects such as vehicles, pedestrians, and signboards. Additionally, the CIWHO-ORC technique involves the design of a krill herd (KH) algorithm with a multi-scale Faster RCNN model for the detection of objects. In addition, a wild horse optimizer (WHO) with an online sequential ridge regression (OSRR) model is applied for the classification of recognized objects. The experimental analysis of the CIWHO-ORC technique is validated using benchmark datasets, and the obtained results demonstrate the promising outcome of the CIWHO-ORC technique in terms of several measures.
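    The abstract does not spell out detector internals, but multi-scale Faster RCNN pipelines rely on intersection-over-union (IoU) for matching predicted and ground-truth boxes. As background only, a minimal sketch of that primitive:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # overlap rectangle, clamped to zero width/height when the boxes are disjoint
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping by half share 50 of 150 units of area.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # prints 0.3333333333333333
```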

    Modified Earthworm Optimization With Deep Learning Assisted Emotion Recognition for Human Computer Interface

    Among the most prominent fields in the human-computer interface (HCI) is emotion recognition using facial expressions. Pose variations, facial accessories, and non-uniform illumination are some of the difficulties in the emotion recognition field. Emotion detection with traditional methods has the shortcoming that feature extraction and classification cannot be mutually optimized. Computer vision (CV) technology improves HCI by visualizing the natural world on a digital platform, much as the human brain does. In CV techniques, advances in machine learning and artificial intelligence result in further enhancements and changes, which ensure improved and more stable visualization. This study develops a new Modified Earthworm Optimization with Deep Learning Assisted Emotion Recognition (MEWODL-ER) technique for HCI applications. The presented MEWODL-ER technique intends to categorize the different kinds of emotions that exist in HCI applications. To do so, the presented MEWODL-ER technique employs the GoogleNet model to extract feature vectors, and the hyperparameter tuning process is performed via the MEWO algorithm. The design of automated hyperparameter adjustment using the MEWO algorithm helps in attaining an improved emotion recognition process. Finally, the quantum autoencoder (QAE) model is implemented for the identification and classification of emotions related to HCI applications. To exhibit the enhanced recognition results of the MEWODL-ER approach, a wide-ranging simulation analysis is performed. The experimental values indicate that the MEWODL-ER technique accomplishes promising performance over other models, with a maximum accuracy of 98.91%.

    Intracranial Haemorrhage Diagnosis Using Willow Catkin Optimization With Voting Ensemble Deep Learning on CT Brain Imaging

    Intracranial haemorrhage (ICH) has become a critical healthcare emergency that needs accurate assessment and early diagnosis. Due to the high rate of mortality (about 40%), the early classification and detection of the disease through computed tomography (CT) images are needed to guarantee a better prognosis and control the occurrence of neurologic deficits. Generally, as an early diagnostic test for severe ICH, CT imaging of the brain is performed in the emergency department. Meanwhile, manual diagnosis is labour-intensive, so automatic ICH recognition and classification techniques utilizing artificial intelligence (AI) models are needed. Therefore, this study presents an Intracranial Haemorrhage Diagnosis using Willow Catkin Optimization with Voting Ensemble (ICHD-WCOVE) model on CT images. The presented ICHD-WCOVE technique exploits computer vision and ensemble learning techniques for automated ICH classification. The presented ICHD-WCOVE technique involves the design of a multi-head attention-based CNN (MAFNet) model for feature vector generation, with optimal hyperparameter tuning using the WCO algorithm. For automated ICH detection and classification, the majority voting ensemble deep learning (MVEDL) technique is used, which comprises a recurrent neural network (RNN), bidirectional long short-term memory (BiLSTM), and an extreme learning machine-stacked autoencoder (ELM-SAE). The experimental analysis of the ICHD-WCOVE approach is tested on a medical dataset, and the outcomes signify the superiority of the ICHD-WCOVE technique over other existing approaches.
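    The MVEDL stage fuses the three base learners by majority voting. A minimal sketch of that fusion step (the class labels and per-model predictions below are hypothetical):

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Combine per-sample class predictions from several models by majority vote."""
    n_samples = len(predictions_per_model[0])
    fused = []
    for i in range(n_samples):
        votes = [preds[i] for preds in predictions_per_model]
        # most_common(1) picks the plurality label; a real system would
        # break ties by model confidence rather than insertion order
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# Three hypothetical base learners voting on four samples.
rnn_preds    = ["ICH", "normal", "ICH", "normal"]
bilstm_preds = ["ICH", "ICH",    "ICH", "normal"]
elmsae_preds = ["normal", "normal", "ICH", "ICH"]
print(majority_vote([rnn_preds, bilstm_preds, elmsae_preds]))
# prints ['ICH', 'normal', 'ICH', 'normal']
```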